23 research outputs found

    Solving the k-center Problem Efficiently with a Dominating Set Algorithm

    We present a polynomial-time heuristic algorithm for the minimum dominating set problem. The algorithm can readily be used to solve the minimum alpha-all-neighbor dominating set problem and the minimum set cover problem. We apply it to heuristically solve the minimum k-center problem in polynomial time. Using a standard set of 40 test problems, we experimentally show that our k-center algorithm performs much better than other well-known heuristics and is competitive with the best known (non-polynomial-time) algorithms for the k-center problem in terms of average solution quality, deviation of the results, and execution time.
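    The abstract does not spell out how a dominating set routine drives a k-center heuristic; the standard reduction thresholds the distance matrix and looks for a small dominating set of the resulting threshold graph. The sketch below is a minimal illustration of that idea, not the authors' algorithm; greedy_dominating_set, k_center_heuristic and the dist callback are illustrative names.

```python
from itertools import combinations

def greedy_dominating_set(adj):
    """Greedy heuristic for minimum dominating set.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns a vertex set such that every vertex is either chosen
    or adjacent to a chosen vertex.
    """
    undominated = set(adj)
    dominating = set()
    while undominated:
        # pick the vertex whose closed neighbourhood covers the most
        # still-undominated vertices
        v = max(adj, key=lambda u: len((adj[u] | {u}) & undominated))
        dominating.add(v)
        undominated -= adj[v] | {v}
    return dominating

def k_center_heuristic(points, dist, k):
    """Heuristic k-center via distance thresholding + dominating sets.

    Tries candidate radii in increasing order and returns (radius, centers)
    for the smallest radius at which the greedy dominating set of the
    threshold graph uses at most k centers.
    """
    for d in sorted({dist(a, b) for a, b in combinations(points, 2)} | {0.0}):
        # threshold graph: u ~ v  iff  dist(u, v) <= d
        adj = {p: {q for q in points if q != p and dist(p, q) <= d}
               for p in points}
        centers = greedy_dominating_set(adj)
        if len(centers) <= k:
            return d, centers
```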

    EXPERIMENTAL COMPARISON OF MATRIX ALGORITHMS FOR DATAFLOW COMPUTER ARCHITECTURE

    In this paper we turn our attention to several algorithms for the dataflow computer paradigm, where dataflow computation is used to augment classical control-flow computation in order to obtain an accelerated algorithm. Our main goal is to experimentally explore the dataflow techniques and features that enable such acceleration. Our focus is on one of the most important challenges in designing a dataflow algorithm: determining the best possible data choreography in the given context. To address this challenge, we systematically enumerate and present techniques for various data choreographies. In particular, we concentrate on algorithms that use matrices and vectors as the underlying data structure. We begin with simple algorithms such as matrix-vector multiplication and the evaluation of polynomials, and continue with more advanced ones such as the simplex algorithm for solving linear programs. To evaluate the algorithms, we compare their running times as well as their dataflow resource consumption.
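    Actual dataflow kernels are written for the target dataflow hardware rather than in a general-purpose language, so the plain-Python sketch below is only meant to illustrate what "data choreography" means for the matrix-and-vector kernels mentioned above: the same product y = Ax can be computed by streaming A row by row through a single accumulator, or column by column against a vector of partial sums. The function names are illustrative and not taken from the paper.

```python
import numpy as np

def mv_row_streaming(A, x):
    """Row choreography: stream one row of A at a time through a single
    accumulator; each finished row emits one element of the result."""
    y = np.zeros(A.shape[0])
    for i, row in enumerate(A):
        acc = 0.0
        for a_ij, x_j in zip(row, x):   # in hardware this inner loop is a pipeline
            acc += a_ij * x_j
        y[i] = acc
    return y

def mv_column_streaming(A, x):
    """Column choreography: stream one column of A at a time and keep a
    whole vector of partial sums; results appear only after the last column."""
    y = np.zeros(A.shape[0])
    for j in range(A.shape[1]):
        y += A[:, j] * x[j]
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, x = rng.random((4, 5)), rng.random(5)
    assert np.allclose(mv_row_streaming(A, x), A @ x)
    assert np.allclose(mv_column_streaming(A, x), A @ x)
```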

    SEARCH-TREE SIZE ESTIMATION FOR THE SUBGRAPH ISOMORPHISM PROBLEM

    This article addresses the problem of finding patterns in graphs, formally defined as the subgraph isomorphism problem, one of the core problems in theoretical computer science. We consider the counting variation of this problem, where the task is to count all instances of the pattern graph G occurring in a (usually larger) target graph H. The vast majority of algorithms for this problem use a variation of backtracking; most commonly they exhaustively search the space of all possible monomorphisms between G and H. The size of the search tree depends heavily on the ordering in which the vertices of G are systematically assigned to the vertices of H. We use a method called heuristic sampling to estimate the size of the search tree for each ordering in advance, and we use this estimate to select the order of vertices of G that minimizes the expected tree size. The approach is empirically evaluated on a set of instances, showing the practical potential of the method.
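    Heuristic sampling builds on Knuth's classic random-probing estimator for backtrack trees. The sketch below shows the simpler Knuth estimator only to illustrate how a tree size can be predicted without traversing the whole tree; the children callback and the probe count are illustrative assumptions. In the setting above, children would enumerate the candidate assignments of the next vertex of G under a fixed ordering, and the ordering with the smallest estimate would be selected.

```python
import random

def knuth_tree_size_estimate(root, children, probes=1000):
    """Estimate the number of nodes of a search tree by random probing
    (Knuth, 1975).  root: the root node; children(node): list of children."""
    total = 0.0
    for _ in range(probes):
        estimate, weight, node = 1.0, 1.0, root
        while True:
            ch = children(node)
            if not ch:
                break
            weight *= len(ch)   # product of branching factors along the probe
            estimate += weight  # unbiased estimate of the node count at this depth
            node = random.choice(ch)
        total += estimate
    return total / probes
```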

    Flexibility in optimization problems

    The thesis Flexibility in optimization problems examines optimization problems in which one must find several different solutions that exhibit a certain degree of mutual resemblance. Such problems are called flexibility problems and their solutions are said to be flexible.

    Optimization problems abound in practice, so their close examination is important both from a practical point of view and for scientific progress. Practical optimization problems, however, often involve uncertain input data: a datum is uncertain if its exact value is not known. There are several approaches to dealing with such uncertainty; besides robustness and stochastic methods, flexibility turns out to be another one. The uncertainty of data can be modeled with scenarios. In general, a scenario represents an instance of an optimization problem, while an instance of a flexibility optimization problem is represented by a set of scenarios. Discrepancies among the scenarios are often bounded; for this purpose, a method of perturbing scenarios can be applied.

    We tackle various flexibility optimization problems mainly from the point of view of computational complexity theory and algorithm analysis, so notions such as Karp and Turing reducibility, complexity classes of decision and optimization problems, and heuristic and approximation algorithms are used throughout. For defining and solving problems we mainly rely on graph theory and on special problem areas such as network flows and matching.

    Flexibility is a very general idea and is therefore, in one form or another, already present in some optimization problems, for example in real-time systems, online algorithms, dynamic data, and the mobility of facilities in location problems. For this reason we formally define the notion of flexibility and related notions, and then describe a general method of introducing flexibility into optimization problems. An instance of a flexibility optimization problem consists of a graph G = (V, E) of scenario transitions, where each i in V is assigned a scenario s_i describing some set of input data. For each scenario s_i a particular solution S_i is found such that its quality z(S_i) is either optimal or, for an NP-hard problem, approximate. Furthermore, a way of computing the particular flexibility f_ij(S_i, S_j) for every (i, j) in E is assumed to be given. The objective function of the problem is composed of these particular flexibilities, for example max{ f_ij | (i, j) in E }, and the goal is usually minimization. Other kinds of flexibility problems are defined similarly.

    The main part of the thesis focuses on various flexible-attribute problems. Here we study the optimization of complete flexibility and concentrate on a simple description of scenarios via sets of attributes. We define several versions of the problem, examine their computational complexity, prove their NP-hardness, and obtain positive and negative results on their absolute and relative approximability. Because of their intractability we also treat simplified versions of the problems; in particular, we analyze flexible-attribute problems on trees, general graphs, chains, and two-stage graphs. For all these problems we design and analyze algorithms that construct optimal or approximate solutions.

    Next, we discuss introducing flexibility into some familiar optimization problems, such as locating centers and the minimum spanning tree. For all flexibility versions of these problems we prove NP-hardness and design either exact or heuristic algorithms. The final chapter summarizes the original scientific contributions, open questions, and ideas for further research in the field.

    Keywords: optimization problem, flexibility, uncertainty, scenario, computational complexity theory, heuristic and approximation algorithm
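    As a small aid to the definition above, the following sketch restates the objective of a flexibility problem in code. It only mirrors the formula max{ f_ij | (i, j) in E }; the function and parameter names are illustrative and not taken from the thesis.

```python
def max_flexibility(scenario_edges, solutions, flex):
    """Objective of a flexibility problem over a scenario-transition graph.

    scenario_edges: iterable of edges (i, j) of the graph G = (V, E);
    solutions: dict mapping scenario index i to its particular solution S_i;
    flex: function flex(i, j, S_i, S_j) returning the particular flexibility f_ij.
    Returns max{ f_ij | (i, j) in E }; a flexibility problem then asks for
    solutions S_i of acceptable quality z(S_i) that minimize this value.
    """
    return max(flex(i, j, solutions[i], solutions[j]) for i, j in scenario_edges)
```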

    International forestry fair ELMIA WOOD 2017


    An experimental evaluation of refinement techniques for the subgraph isomorphism backtracking algorithms

    In this paper we study a well-known computationally hard problem, the subgraph isomorphism problem, where the goal is, for a given pattern graph and target graph, to determine whether the pattern is a subgraph of the target. Numerous algorithms for the problem exist in the literature, and most of them are based on backtracking. Since straightforward backtracking is usually slow, practical algorithms employ many refinement techniques. The main goal of this paper is to study such refinement techniques and to determine how much they speed up backtracking algorithms. To do this we use the methodology of experimental algorithmics: we experimentally evaluate the techniques and their combinations and thereby demonstrate their usefulness in practice.
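    For readers unfamiliar with the baseline these refinements are added to, the sketch below shows a textbook backtracking search for the (non-induced) subgraph isomorphism decision problem with two of the most common refinements, degree-based pruning and an adjacency consistency check. It is a generic illustration under those assumptions, not any specific algorithm evaluated in the paper.

```python
def has_subgraph_isomorphism(pattern, target):
    """Decide by backtracking whether `pattern` occurs as a subgraph of `target`.

    pattern, target: dicts mapping a vertex to the set of its neighbours.
    """
    order = sorted(pattern, key=lambda v: -len(pattern[v]))  # static vertex ordering
    mapping, used = {}, set()

    def backtrack(depth):
        if depth == len(order):
            return True                        # all pattern vertices mapped
        u = order[depth]
        for v in target:
            if v in used or len(target[v]) < len(pattern[u]):
                continue                       # degree-based pruning
            if any(w in mapping and mapping[w] not in target[v]
                   for w in pattern[u]):
                continue                       # adjacency consistency check
            mapping[u] = v
            used.add(v)
            if backtrack(depth + 1):
                return True
            del mapping[u]
            used.remove(v)
        return False

    return backtrack(0)

if __name__ == "__main__":
    triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    assert not has_subgraph_isomorphism(triangle, square)      # no triangle in a 4-cycle
    assert has_subgraph_isomorphism({0: {1}, 1: {0}}, square)  # a single edge does occur
```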

    Graph automorphisms for compression

    Detecting automorphisms is a natural way to identify redundant information present in structured data; once such redundancies are detected, they can be exploited for compression. In this paper we explore two classes of graphs that capture this intuitive property of automorphisms. Symmetry-compressible graphs are the first class; they introduce the basic concepts but use only global symmetries for compression. To make the concept more practical, local symmetries must be used as well, so we extend the basic class to near-symmetry-compressible graphs. Furthermore, we develop two algorithms that compress practical instances and empirically evaluate them on a set of realistic graphs.
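    The core intuition can be shown with a toy sketch: given one non-trivial automorphism pi of a graph, the edges split into orbits under pi, and it suffices to store pi together with one representative edge per orbit. This is only an illustration of the intuition under that assumption, not the symmetry-compressible or near-symmetry-compressible encodings developed in the paper; compression pays off only when the representatives plus an encoding of pi take less space than the original edge list.

```python
def edge_orbit_representatives(edges, pi):
    """Keep one representative edge per orbit under the automorphism pi.

    edges: set of frozensets {u, v}; pi: dict mapping each vertex to its image.
    """
    remaining, reps = set(edges), []
    while remaining:
        e = remaining.pop()
        reps.append(e)
        orbit = e
        while True:
            orbit = frozenset(pi[v] for v in orbit)   # apply pi to the edge
            if orbit == e:
                break                                  # orbit closed
            remaining.discard(orbit)
    return reps

def expand(reps, pi):
    """Rebuild the full edge set from the representatives and pi."""
    edges = set()
    for e in reps:
        orbit = e
        while orbit not in edges:
            edges.add(orbit)
            orbit = frozenset(pi[v] for v in orbit)
    return edges
```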